Operationalising Extended Cognition: Formal Metrics for Corporate Knowledge and Legal Accountability

Perrier, Elija

arXiv.org Artificial Intelligence

Corporate responsibility turns on notions of corporate mens rea, traditionally imputed from human agents. Yet these assumptions are under challenge as generative AI increasingly mediates enterprise decision-making. Building on the theory of extended cognition, we argue that, in response, corporate knowledge may be redefined as a dynamic capability, measurable by the efficiency of its information-access procedures and the validated reliability of their outputs. We develop a formal model that captures the epistemic states of corporations deploying sophisticated AI or information systems, introducing a continuous organisational knowledge metric $S_S(\varphi)$ that integrates a pipeline's computational cost and its statistically validated error rate. We derive a thresholded knowledge predicate $\mathsf{K}_S$ to impute knowledge and a firm-wide epistemic capacity index $\mathcal{K}_{S,t}$ to measure overall capability. We then operationally map these quantitative metrics onto the legal standards of actual knowledge, constructive knowledge, wilful blindness, and recklessness. Our work provides a pathway towards measurable and justiciable audit artefacts that render the corporate mind tractable and accountable in the algorithmic age.
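As a toy illustration of how such a thresholded knowledge predicate might behave, the sketch below combines a pipeline's computational cost and its validated error rate into a single score and thresholds it. The functional form, parameter names, and threshold are hypothetical stand-ins, not the paper's actual definitions of the knowledge metric or the predicate.

```python
# Illustrative sketch only: a score that rewards cheap information access
# and a low, validated error rate, plus a thresholded knowledge predicate.
# The functional form and the threshold value are assumptions for
# demonstration, not the paper's definitions.

def org_knowledge_score(cost: float, error_rate: float,
                        cost_scale: float = 10.0) -> float:
    """Toy stand-in for the organisational knowledge metric: higher when
    the pipeline answering a query is cheap to run and rarely wrong."""
    efficiency = 1.0 / (1.0 + cost / cost_scale)  # in (0, 1], decays with cost
    reliability = 1.0 - error_rate                # statistically validated accuracy
    return efficiency * reliability


def knows(cost: float, error_rate: float, threshold: float = 0.5) -> bool:
    """Toy knowledge predicate: impute knowledge when the score clears
    a (hypothetical) threshold."""
    return org_knowledge_score(cost, error_rate) >= threshold
```

On this toy form, a cheap pipeline with a 1% error rate clears the threshold, while an expensive pipeline with a 40% error rate does not, mirroring the intended contrast between actual knowledge and mere capacity.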


Scaling Multi-Agent Epistemic Planning through GNN-Derived Heuristics

Briglia, Giovanni, Fabiano, Francesco, Mariani, Stefano

arXiv.org Artificial Intelligence

Multi-agent Epistemic Planning (MEP) is an autonomous planning framework for reasoning about both the physical world and the beliefs of agents, with applications in domains where information flow and awareness among agents are critical. The richness of MEP requires states to be represented as Kripke structures, i.e., directed labeled graphs. This representation limits the applicability of existing heuristics, hindering the scalability of epistemic solvers, which must explore an exponential search space without guidance, often resulting in intractability. To address this, we exploit Graph Neural Networks (GNNs) to learn patterns and relational structures within epistemic states and use them to guide the planning process. GNNs, which naturally capture the graph-like nature of Kripke models, allow us to derive meaningful estimates of state quality -- e.g., the distance from the nearest goal -- by generalizing knowledge obtained from previously solved planning instances. We integrate these predictive heuristics into an epistemic planning pipeline and evaluate them against standard baselines, showing improvements in the scalability of multi-agent epistemic planning.
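To make the idea concrete, here is a minimal, untrained message-passing sketch over a Kripke-style directed graph: worlds carry feature vectors, edges are agent-labelled accessibility relations, and a mean readout produces a scalar heuristic estimate. The features, update rule, and readout are illustrative placeholders, not the architecture or training setup used in the paper.

```python
# Minimal message-passing sketch over a Kripke-style graph.
# All choices here (features, averaging update, mean readout, no learned
# weights) are assumptions for illustration only.

def gnn_heuristic(features, edges, rounds=2):
    """features: {world: [floats]}; edges: [(src, agent, dst)].
    Returns a scalar 'state quality' estimate."""
    h = {w: list(v) for w, v in features.items()}
    for _ in range(rounds):
        new_h = {}
        for w, vec in h.items():
            # aggregate messages along incoming accessibility edges
            incoming = [h[s] for (s, _a, d) in edges if d == w]
            if incoming:
                agg = [sum(col) / len(incoming) for col in zip(*incoming)]
            else:
                agg = [0.0] * len(vec)
            # update: average current state with aggregated messages
            new_h[w] = [(x + m) / 2.0 for x, m in zip(vec, agg)]
        h = new_h
    # readout: mean over all worlds and feature dimensions
    flat = [x for vec in h.values() for x in vec]
    return sum(flat) / len(flat)
```

A trained variant would replace the fixed averaging with learned message and update functions, fitted on states from previously solved planning instances so the readout approximates, e.g., distance to the nearest goal.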


A General Framework of Epistemic Forgetting and its Instantiation by Ranking Functions

Beierle, Christoph, Hahn, Alexander, Howey, Diana, Kern-Isberner, Gabriele, Sauerwald, Kai

arXiv.org Artificial Intelligence

Forgetting as a knowledge management operation deliberately ignores parts of the knowledge and beliefs of an agent, for various reasons. Forgetting has many facets: one may want to forget parts of the syntax, a proposition, or a conditional. In the literature, two main operators suitable for performing forgetting have been proposed and investigated in depth. First, variable elimination is a syntactic method that masks out certain atomic variables to focus on the rest of the language; it has been used mainly in logic programming and answer set programming. Second, contraction in AGM belief revision theory effectively removes propositions from belief sets under logical deduction. Both operations rely mainly on classical logics. In this article, we take an epistemic perspective and study forgetting operations in epistemic states with richer semantic structures, but with clear links to propositional logic. This allows us to investigate what forgetting against an epistemic background means, thereby lifting well-known and novel forgetting operations to the epistemic level. We present five general types of epistemic forgetting and instantiate them with seven concrete forgetting operations for Spohn's ranking functions. Taking inspiration from postulates of forgetting in both logic programming and AGM theory, we propose a rich landscape of axioms for evaluating forgetting operations. Finally, we evaluate all concrete forgetting operations against all postulates, leading to a novel comprehensive overview that highlights differences and commonalities among the forgetting operators.
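As one concrete flavour of what forgetting can look like for a ranking function, the sketch below forgets an atom by giving each world and its atom-flipped variant the minimum of their two ranks, so the forgotten atom becomes epistemically irrelevant. This is a natural marginalisation-style operation in the spirit of variable elimination; it is not claimed to be any particular one of the paper's seven operators.

```python
# Sketch of a marginalisation-style forgetting operation on a Spohn
# ranking function (OCF). A world is encoded as the frozenset of atoms
# true in it; kappa maps every world to its implausibility rank.
# This specific operator is an illustrative assumption.

def forget_atom(kappa, atom):
    """Return the OCF after forgetting `atom`: each world and its
    atom-flipped twin share the minimum of their ranks, so the ranking
    no longer discriminates on `atom`."""
    def flip(world):
        # the world that agrees everywhere except on `atom`
        return frozenset(world - {atom} if atom in world else world | {atom})
    return {w: min(r, kappa[flip(w)]) for w, r in kappa.items()}
```

For example, over atoms {p, q}, forgetting p collapses the ranks of each pair of worlds differing only on p to their minimum, leaving a ranking that depends on q alone.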


On Definite Iterated Belief Revision with Belief Algebras

Meng, Hua, Long, Zhiguo, Sioutis, Michael, Zhou, Zhengchun

arXiv.org Artificial Intelligence

Traditional logic-based belief revision research focuses on designing rules to constrain the behavior of revision operators. Frameworks have been proposed to characterize iterated revision rules, but they are often too loose: multiple revision operators may all satisfy the rules under the same belief condition. In many practical applications, such as safety-critical ones, it is important to specify a definite revision operator so that agents can iteratively revise their beliefs in a deterministic way. In this paper, we propose a novel framework for iterated belief revision that characterizes belief information through preference relations. Semantically, both beliefs and new evidence are represented as belief algebras, which provide a rich and expressive foundation for belief revision. Building on traditional revision rules, we introduce additional postulates for revision with belief algebras, including an upper-bound constraint on the outcomes of revision. We prove that the revision result is uniquely determined given the current belief state and the new evidence. Furthermore, to make the framework more useful in practice, we develop a concrete algorithm for performing the proposed revision process. We argue that this approach may offer a more predictable and principled method for belief revision, making it suitable for real-world applications.


Knowledge in multi-robot systems: an interplay of dynamics, computation and communication

Cignarale, Giorgio, Felber, Stephan, Goubault, Eric, Flores, Bernardo Hummes, Galeana, Hugo Rincon

arXiv.org Artificial Intelligence

We show that the hybrid systems perspective on distributed multi-robot systems is compatible with logical models of knowledge already used in distributed computing, and we demonstrate its usefulness by deriving sufficient epistemic conditions for robot exploration and gathering tasks to be solvable. We provide a separation of the physical and computational aspects of a robotic system, allowing us to decouple the problems related to each and to directly use methods from control theory and distributed computing, fields that are traditionally distant in the literature. Finally, we demonstrate a novel approach for reasoning about knowledge in multi-robot systems through a principled method of converting a switched hybrid dynamical system into a temporal-epistemic logic model, passing through an abstract state machine representation. This creates space for methods and results to be exchanged across the fields of control theory, distributed computing, and temporal-epistemic logic when reasoning about multi-robot systems.


What killed the cat? Towards a logical formalization of curiosity (and suspense, and surprise) in narratives

de Saint-Cyr, Florence Dupin, Bosser, Anne-Gwenn, Callac, Benjamin, Maisel, Eric

arXiv.org Artificial Intelligence

Humans tell stories to make sense of the world and to communicate their understanding of what happens. Storytelling presupposes the ability to sort out which events are worth telling, to decide on a level of detail for describing them, and to select, among possible causes, those deemed worth telling. It also presupposes an affective machinery for capturing an audience's attention (emotional contagion, suspense elicitation, ...). In the act of storytelling, structural and affective phenomena are thus combined with communicative goals in mind. This combination has shown its effectiveness in this respect: the phenomenon of narrative transportation (the experience of being immersed in a story) has been linked to persuasion [27]. The narrative paradigm therefore provides an appropriate framework, in which causal reasoning about the situations narrated [53] is combined with narrative devices that encourage the audience's emotional involvement [51], for studying and modeling how opinion is formed and evolves. Building a framework for reasoning about and unveiling storytelling mechanics could pave the way for intellectual self-defense tools, enabling citizens to arm themselves against hostile disinformation or influence campaigns. Previous work in structural narratology has studied the way stories are conveyed to their audience, and seminal work by (for instance) Genette [25] or Propp [45] has served as the backbone inspiration for computational narrative models and storytelling systems [43].


The Challenges of Effective AGM Belief Contraction

Klumpp, Dominik, Ribeiro, Jandson S.

arXiv.org Artificial Intelligence

Despite the significant interest in extending the AGM paradigm of belief change beyond finitary logics, the computational aspects of AGM have remained almost untouched. We investigate the computability of AGM contraction on non-finitary logics and show an intriguing negative result: there are infinitely many uncomputable AGM contraction functions in such logics. More drastically, even if we restrict the theories used to represent epistemic states, in all non-trivial cases the uncomputability remains. On the positive side, we identify an infinite class of computable AGM contraction functions on Linear Temporal Logic (LTL). We use Büchi automata to construct such functions, as well as to represent and reason about LTL knowledge.


Credibility-Limited Revision for Epistemic Spaces

Sauerwald, Kai

arXiv.org Artificial Intelligence

We consider credibility-limited revision in the framework of belief change for epistemic spaces, permitting inconsistent belief sets and inconsistent beliefs. In this unrestricted setting, the class of credibility-limited revision operators does not include any AGM revision operators. We extend the class of credibility-limited revision operators in a way that all AGM revision operators are included while keeping the original spirit of credibility-limited revision. Extended credibility-limited revision operators are defined axiomatically. A semantic characterization of extended credibility-limited revision operators that employ total preorders on possible worlds is presented.


The Realizability of Revision and Contraction Operators in Epistemic Spaces

Sauerwald, Kai, Thimm, Matthias

arXiv.org Artificial Intelligence

This paper studies the realizability of belief revision and belief contraction operators in epistemic spaces. We observe that AGM revision and AGM contraction operators for epistemic spaces are realizable only in precisely determined epistemic spaces. We define the class of linear change operators, a special kind of maxichoice operator. When AGM revision (respectively, AGM contraction) is realizable, linear change operators are a canonical realization.


Depth-Bounded Epistemic Planning

Bolander, Thomas, Burigana, Alessandro, Montali, Marco

arXiv.org Artificial Intelligence

In this paper, we propose a novel algorithm for epistemic planning based on dynamic epistemic logic (DEL). The novelty is that we limit the depth of reasoning of the planning agent to an upper bound b, meaning that the planning agent can only reason about higher-order knowledge up to (modal) depth b. The algorithm makes use of a novel type of canonical b-bisimulation contraction guaranteeing unique minimal models with respect to b-bisimulation. We show our depth-bounded planning algorithm to be sound. Additionally, we show it to be complete with respect to planning tasks having a solution within bound b of reasoning depth (and hence the iterative bound-deepening variant is complete in the standard sense). For bound b of reasoning depth, the algorithm is shown to be (b+1)-EXPTIME-complete, and furthermore fixed-parameter tractable in the number of agents and atoms. We present both a tree search and a graph search variant of the algorithm, and we benchmark an implementation of the tree search version against a baseline epistemic planner.
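A rough sketch of the idea behind depth-bounded bisimulation: partition the worlds of a Kripke model by their valuation, then refine the partition b times using agent-labelled successor classes; worlds still grouped together after b rounds are b-bisimilar, so a contraction may keep one representative per class. The encoding below is an illustrative partition-refinement scheme, not the paper's canonical b-bisimulation contraction.

```python
# Illustrative sketch: compute b-bounded bisimulation classes of a
# Kripke model by partition refinement. Worlds in the same class after
# b refinement rounds satisfy the same formulas up to modal depth b.
# The representation and refinement scheme are assumptions for
# demonstration purposes.

def b_bisim_classes(valuation, edges, b):
    """valuation: {world: frozenset_of_true_atoms};
    edges: [(world, agent, world)]; returns sorted lists of worlds."""
    # depth 0: worlds are equivalent iff they satisfy the same atoms
    cls = {w: valuation[w] for w in valuation}
    for _ in range(b):
        sig = {}
        for w in valuation:
            # signature: own valuation plus agent-labelled successor classes
            succ = frozenset((a, cls[v]) for (u, a, v) in edges if u == w)
            sig[w] = (valuation[w], succ)
        cls = sig
    groups = {}
    for w, s in cls.items():
        groups.setdefault(s, set()).add(w)
    return sorted(map(sorted, groups.values()))
```

A b-contraction built on top of this would pick one world per class and redirect edges accordingly, yielding a model that is minimal with respect to b-bisimulation while preserving all knowledge up to depth b.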